
    Non-Rigid Registration between Histological and MR Images of the Prostate: A Joint Segmentation and Registration Framework

    This paper presents a 3D non-rigid registration algorithm between histological and MR images of the cancerous prostate. To compensate for the loss of 3D integrity during histology sectioning, the series of 2D histological slices is first reconstructed into a 3D histological volume. The 3D histology-MRI registration is then obtained by maximizing (a) landmark similarity and (b) cancer-region overlap between the two images. The former captures distortions at the prostate boundary and at internal blob-like structures; the latter captures distortions specifically in cancer regions. Landmark similarity is maximized by an annealing process in which correspondences between the automatically detected boundary and internal landmarks are iteratively established in a fuzzy-to-deterministic fashion. Cancer-region overlap is maximized in a joint cancer segmentation and registration framework, where the two interleaved problems, segmentation and registration, inform each other iteratively. Registration accuracy is established by comparison against human-rater-defined landmarks and against other methods. The ultimate goal of this registration is to warp the histologically defined cancer ground truth into MRI, enabling a more thorough understanding of the MRI signal characteristics of cancerous prostate tissue and promoting MRI-based prostate cancer diagnosis in future studies.
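    The fuzzy-to-deterministic annealing idea can be sketched as a softassign-style loop: soft correspondence weights computed from landmark distances sharpen toward a hard assignment as a temperature cools. The distance matrix, cooling schedule, and descriptor choice below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def anneal_correspondences(dist, t_init=1.0, t_final=0.01, rate=0.5):
    """Turn a landmark distance matrix into correspondence weights,
    sharpening from fuzzy to near-deterministic as the temperature cools."""
    t = t_init
    while t > t_final:
        # Soft correspondence: row-wise softmax of negative distances.
        p = np.exp(-dist / t)
        p /= p.sum(axis=1, keepdims=True)
        t *= rate
    return p

# Toy example: 3 histology landmarks vs 3 MRI landmarks.
dist = np.array([[0.1, 0.9, 0.8],
                 [0.7, 0.2, 0.9],
                 [0.8, 0.9, 0.1]])
p = anneal_correspondences(dist)
print(p.round(2))  # near-permutation matrix: each row peaks at its true match
```

    In a full registration loop, each annealing sweep would be interleaved with re-estimating the deformation from the current soft matches.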

    Accuracy of Segment-Anything Model (SAM) in medical image segmentation tasks

    The Segment Anything Model (SAM) was introduced as a foundation model for image segmentation. It was trained on over 1 billion masks from 11 million natural images and can perform zero-shot segmentation using various prompts, such as masks, boxes, and points. In this report, we explored (1) the accuracy of SAM on 12 public medical image segmentation datasets covering various organs (brain, breast, chest, lung, skin, liver, bowel, pancreas, and prostate), image modalities (2D X-ray, histology, endoscopy, and 3D MRI and CT), and health conditions (normal, lesioned), and (2) whether SAM, as a computer-vision foundation segmentation model, can suggest promising research directions for medical image segmentation. We found that SAM without re-training on medical images does not perform as accurately as U-Net or other deep learning models trained on medical images.
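    Comparisons like the one above are typically scored with the Dice similarity coefficient between a predicted mask and the ground-truth mask. A minimal implementation (standard metric, not the report's own code):

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

pred  = np.array([[1, 1, 0], [0, 1, 0]])  # e.g. a SAM output mask
truth = np.array([[1, 1, 0], [1, 1, 0]])  # expert annotation
print(dice(pred, truth))  # 0.857...
```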

    Simultaneous Estimation and Segmentation of T1 Map for Breast Parenchyma Measurement

    Breast density has been shown to be an independent risk factor for breast cancer. To segment breast parenchyma, which has been proposed as a biomarker of breast cancer risk, we present an integrated algorithm for simultaneous T1 map estimation and segmentation from a series of magnetic resonance (MR) breast images. The advantage of this algorithm is that the T1 map estimation step (E-step) and the T1-map-based tissue segmentation step (S-step) benefit each other. Because the estimated T1 map can be noisy, owing to the complexity of the T1 estimation method, the tentative tissue segmentation from the S-step helps perform edge-preserving smoothing of the estimated T1 map in the E-step, removing noise while preserving tissue boundaries. Conversely, the improved T1 map from the E-step helps segment breast tissues more accurately and with less noise. By repeating these steps, we simultaneously obtain better results for both T1 map estimation and segmentation. Experimental results show the effectiveness of the proposed algorithm in breast tissue segmentation and parenchyma volume measurement.
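    The E-step/S-step alternation can be illustrated on a toy 1-D "T1 map": segment by k-means, then smooth only within a segment so boundaries survive, and repeat. The synthetic T1 values, noise level, and 3-sample neighborhood are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 1-D "T1 map": two tissues (T1 ~ 300 ms and ~ 900 ms) plus noise.
t1 = np.concatenate([np.full(50, 300.0), np.full(50, 900.0)])
noisy = t1 + rng.normal(0, 80, t1.size)
t1_est = noisy.copy()

for _ in range(5):
    # S-step: segment the current estimate with 2-class k-means.
    c = np.array([t1_est.min(), t1_est.max()])
    for _ in range(10):
        labels = np.abs(t1_est[:, None] - c[None, :]).argmin(axis=1)
        c = np.array([t1_est[labels == k].mean() for k in (0, 1)])
    # E-step: average only over same-label neighbors, preserving edges.
    smoothed = t1_est.copy()
    for i in range(1, t1_est.size - 1):
        nb = [j for j in (i - 1, i, i + 1) if labels[j] == labels[i]]
        smoothed[i] = t1_est[nb].mean()
    t1_est = smoothed

print(np.abs(t1_est - t1).mean())  # residual error vs. the noisy input
```

    Each pass tightens both outputs: cleaner T1 values sharpen the segmentation, and the segmentation in turn tells the smoother where not to blur.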

    Sampling the spatial patterns of cancer: Optimized biopsy procedures for estimating prostate cancer volume and Gleason Score

    Prostate biopsy is the current gold-standard procedure for prostate cancer diagnosis. Existing biopsy procedures have mostly focused on detecting cancer presence. However, they often ignore the potential of biopsy to estimate cancer volume (CV) and Gleason Score (GS, a cancer grade descriptor), two surrogate markers of cancer aggressiveness and two crucial factors for treatment planning. To fill this gap, this paper assumes and demonstrates that, by optimally sampling the spatial patterns of cancer, biopsy procedures can be specifically designed to estimate CV and GS. Our approach combines image analysis and machine learning tools in an atlas-based population study consisting of three steps. First, the spatial distributions of cancer in a patient population are learned by constructing statistical atlases from histological images of prostate specimens with known cancer ground truth. Then, the optimal biopsy locations are determined in a feature selection formulation, such that biopsy outcomes (cancer presence or absence) at those locations best differentiate between existing specimens with different (high vs. low) CV/GS values. Finally, the optimized biopsy locations are used to estimate whether a new prostate cancer patient has high or low CV/GS values, in a binary classification formulation. Estimation accuracy and generalization ability are evaluated by classification rates and the associated receiver operating characteristic (ROC) curves in cross-validation. The optimized biopsy procedures are also designed to be robust to the almost inevitable needle displacement errors of clinical practice, and are found to be robust to variations in the optimization parameters and the training populations as well.
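    The feature selection step can be sketched as greedy forward selection over candidate biopsy sites: each site contributes one binary outcome per patient, and sites are added one at a time to maximize how well a simple classifier separates high from low CV/GS. The simulated outcomes, nearest-centroid classifier, and three-site budget are assumptions for illustration, not the paper's procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pat, n_loc = 40, 12
labels = rng.integers(0, 2, n_pat)          # 1 = high CV/GS, 0 = low
# Simulated binary biopsy outcomes; sites 2 and 7 are made informative.
X = rng.integers(0, 2, (n_pat, n_loc))
X[:, 2] = labels ^ (rng.random(n_pat) < 0.10)
X[:, 7] = labels ^ (rng.random(n_pat) < 0.15)

def accuracy(cols):
    """Nearest-centroid training accuracy using only the selected sites."""
    mu0 = X[labels == 0][:, cols].mean(0)
    mu1 = X[labels == 1][:, cols].mean(0)
    d0 = np.abs(X[:, cols] - mu0).sum(1)
    d1 = np.abs(X[:, cols] - mu1).sum(1)
    return ((d1 < d0) == labels).mean()

chosen = []
for _ in range(3):  # greedily pick 3 biopsy sites
    best = max((c for c in range(n_loc) if c not in chosen),
               key=lambda c: accuracy(chosen + [c]))
    chosen.append(best)
print(sorted(chosen))  # the informative sites should be favored
```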

    Detecting Mutually-Salient Landmark Pairs with MRF Regularization

    In this paper, we present a framework for extracting mutually-salient landmark pairs for registration. Traditional methods detect landmarks one by one, separately in each of the two images; the detected landmarks may therefore have low discriminability and are not necessarily well suited for matching. In contrast, our method detects landmarks pair-by-pair across images, and each pair is required to be mutually salient, i.e., to correspond uniquely to each other. The second merit of our framework is that, instead of finding individually optimal correspondences, a local approach that can cause self-intersection of the resultant deformation, it adopts a Markov random field (MRF)-based spatial arrangement to select the globally optimal landmark pairs. In this way, the geometric consistency of the correspondences is maintained, and the resultant deformations are relatively smooth and topology-preserving. Promising experimental validation through a radiologist's evaluation of the established correspondences is presented.
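    On a toy problem the MRF idea reduces to minimizing a unary term (how far each landmark moves) plus a pairwise term (how much inter-landmark distances distort) over all candidate assignments. The landmarks, candidates, and brute-force minimization below are illustrative; a real MRF solver would replace the exhaustive search.

```python
import itertools
import numpy as np

# Toy landmarks in the source image; some targets include a decoy candidate.
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
cand = [
    np.array([[0.1, 0.0]]),                       # landmark 0
    np.array([[1.1, 0.1], [3.0, 3.0]]),           # landmark 1 (+ decoy)
    np.array([[0.0, 1.1], [2.5, -1.0]]),          # landmark 2 (+ decoy)
]

def energy(assign):
    """Unary: displacement magnitude. Pairwise: inter-landmark distortion."""
    pts = np.array([cand[i][a] for i, a in enumerate(assign)])
    e = sum(np.linalg.norm(pts[i] - src[i]) for i in range(len(src)))
    for i, j in itertools.combinations(range(len(src)), 2):
        e += abs(np.linalg.norm(pts[i] - pts[j]) -
                 np.linalg.norm(src[i] - src[j]))
    return e

best = min(itertools.product(*(range(len(c)) for c in cand)), key=energy)
print(best)  # (0, 0, 0): the geometrically consistent candidates win
```

    The pairwise term is what enforces global consistency: a decoy that matches well individually is rejected because it distorts distances to the other landmarks.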

    Registering Histological and MR Images of Prostate for Image-based Cancer Detection

    Rationale and Objectives: Needle biopsy is currently the only way to confirm prostate cancer. To increase the prostate cancer diagnostic rate, needles should be deployed at suspicious cancer locations. High-contrast MR imaging provides a powerful tool for detecting suspicious cancerous tissue; to exploit it, the MR appearance of cancerous tissue must be characterized and learned from a sufficient number of prostate MR images with known cancer information. Ground-truth cancer information, however, is available only in histological images. It is therefore necessary to warp ground-truth cancerous regions from histological images to MR images via a registration procedure. The objective of this paper is to develop a registration technique for aligning histological and MR images of the same prostate.

    Materials and Methods: Five pairs of histological and T2-weighted MR images of radical prostatectomy specimens were collected. For each pair, registration is guided by two sets of correspondences that can be reliably established on the prostate boundaries and on internal salient blob-like structures of the histological and MR images.

    Results: Our registration method accurately registers histological and MR images. It yields results comparable to manual registration in terms of landmark distance and volume overlap, and it outperforms both affine registration and boundary-guided registration.

    Conclusions: We have developed a novel method for deformable registration of histological and MR images of the same prostate. Beyond the collection of ground-truth cancer information in MR images, the method has other potential applications: an automatic, accurate registration of histological and MR images builds a bridge between in vivo anatomical information and ex vivo pathological information, which is valuable for various clinical studies.

    Probabilistic Segmentation of Brain Tumors Based on Multi-Modality Magnetic Resonance Images

    In this paper, multi-modal Magnetic Resonance (MR) images are integrated into a tissue profile aimed at differentiating tumor components, edema, and normal tissue. This is achieved by a tissue classification technique that learns the appearance models of different tissue types from training samples identified by an expert and assigns a tissue label to each voxel. These tissue classifiers produce probabilistic tissue maps reflecting the imaging characteristics of tumors and surrounding tissues, which may be employed to aid diagnosis, tumor boundary delineation, surgery, and treatment planning. The main contributions of this work are: 1) conventional structural MR modalities are combined with diffusion tensor imaging data to create an integrated multi-modality profile for brain tumors, and 2) in addition to the enhancing and non-enhancing tumor components, edema is characterized as a separate class in our framework. Classification performance is tested on 22 diverse tumor cases using cross-validation.
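    A minimal version of such a per-voxel classifier fits one Gaussian appearance model per tissue class on multi-modal features and evaluates class posteriors at each voxel, yielding a probabilistic tissue map. The class prototypes, diagonal-covariance Gaussians, and synthetic training samples are assumptions for illustration, not the paper's classifier.

```python
import numpy as np

rng = np.random.default_rng(2)
# Per-voxel multi-modal features (e.g. T1, T2, a DTI-derived map).
means = {"tumor": [2.0, 1.0, 0.5], "edema": [1.0, 2.0, 1.5],
         "normal": [0.0, 0.0, 1.0]}
train = {k: rng.normal(m, 0.3, (100, 3)) for k, m in means.items()}

# Learn a diagonal-Gaussian appearance model per tissue class.
models = {k: (x.mean(0), x.var(0)) for k, x in train.items()}

def posterior(x):
    """Per-class probabilities for one voxel's feature vector."""
    lik = {}
    for k, (mu, var) in models.items():
        lik[k] = np.exp(-0.5 * ((x - mu) ** 2 / var).sum()) / np.sqrt(
            (2 * np.pi * var).prod())
    z = sum(lik.values())
    return {k: v / z for k, v in lik.items()}

p = posterior(np.array([1.9, 1.1, 0.6]))   # near the "tumor" prototype
print(max(p, key=p.get))  # tumor
```

    Applied voxel-by-voxel, the posteriors form one probability map per tissue class, which is exactly the kind of soft output the abstract describes.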

    Cascaded Segmentation of Brain Tumors Using Multi-Modality MR Profiles

    Accurate identification of the brain tumor boundary and its components is crucial for effective treatment, but is challenging owing to large variations in tumor size, shape, and location, inherent inhomogeneity, the presence of edema, and infiltration into surrounding tissue. Most existing tumor segmentation methods use supervised or unsupervised tissue classification based on conventional T1- and/or T2-enhanced images and show promising results in differentiating tumor from normal tissue [1-3]. However, perhaps because of the lack of MR modalities that could provide a more distinctive appearance signature for each tissue type, these methods have difficulty differentiating tumor components (enhancing or non-enhancing) and edema. These issues are alleviated by the framework proposed in this paper, which incorporates multi-modal MR images, including conventional structural MR images and diffusion tensor imaging (DTI)-derived maps, to create tumor tissue profiles that better differentiate tumor components, edema, and normal tissue types. Tissue profiles are created using pattern classification techniques that learn the multi-modal appearance signature of each tissue type by training on expert-identified samples from several patients. The novel use of DTI in the multi-modality framework incorporates the information that tumors grow along white matter tracts [4]. In addition to distinguishing between enhancing and non-enhancing tumors, our framework identifies edema as a separate class, contributing to the solution of the tumor boundary detection problem. The tumor segmentation and probabilistic tissue maps generated by applying the classifiers to a new patient reflect the subtle characteristics of tumors and surrounding tissues, and could thus be used to aid tumor diagnosis, tumor boundary identification, and tumor surgery planning.